Speaker embedding extractors significantly influence the performance of clustering-based speaker diarisation systems. Conventionally, only one embedding is extracted from each speech segment. However, because of the sliding-window approach, a segment can easily contain two or more speakers when it spans a speaker change point. This study proposes a novel embedding extractor architecture, referred to as a high-resolution embedding extractor (HEE), which extracts multiple high-resolution embeddings from each speech segment. HEE consists of a feature-map extractor and an enhancer, where the enhancer with the self-attention mechanism is the key to its success. The enhancer replaces the aggregation process: instead of a global pooling layer, it uses attention over the global context to inject relevant information into each frame. Each of the extracted dense frame-level embeddings can represent a speaker, so multiple speakers in a segment can be represented by different frame-level features. We also propose a training framework that artificially generates mixture data to train the proposed HEE. In experiments on five evaluation sets, including four public datasets, the proposed HEE demonstrates at least a 10% improvement on each evaluation set except one, which our analysis attributes to that dataset containing fewer rapid speaker changes.
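As a rough illustration of the idea, the sketch below (in PyTorch, with layer sizes, module names, and the toy convolutional front-end all assumed rather than taken from the paper) replaces the usual global pooling step with a self-attention enhancer so that every frame keeps its own embedding.

```python
# A minimal sketch of the HEE idea, assuming a PyTorch setting; not the authors' exact design.
import torch
import torch.nn as nn

class HighResolutionEmbeddingExtractor(nn.Module):
    def __init__(self, feat_dim: int = 80, emb_dim: int = 256, n_heads: int = 4):
        super().__init__()
        # Feature-map extractor: any frame-level encoder (here a toy CNN stack).
        self.feature_map_extractor = nn.Sequential(
            nn.Conv1d(feat_dim, emb_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(emb_dim, emb_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Enhancer: self-attention that mixes global context into each frame,
        # replacing the usual global pooling step.
        self.enhancer = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)
        self.proj = nn.Linear(emb_dim, emb_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, feat_dim) acoustic features for one segment
        x = self.feature_map_extractor(features.transpose(1, 2)).transpose(1, 2)
        # Each frame attends to all frames in the segment (global context).
        enhanced, _ = self.enhancer(x, x, x)
        # Return one embedding per frame instead of a single segment embedding.
        return self.proj(enhanced)  # (batch, time, emb_dim)

# Usage: a segment of 200 frames of 80-dim filterbanks yields 200 frame-level embeddings.
emb = HighResolutionEmbeddingExtractor()(torch.randn(1, 200, 80))
print(emb.shape)  # torch.Size([1, 200, 256])
```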
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while receiving prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs. However, popular explanation methods have been found to be fragile to minor perturbations of the input features or model parameters. Relying on constraint relaxation techniques from non-convex optimization, we develop a method that upper-bounds the largest change an adversary can make to a gradient-based explanation via bounded manipulation of either the input features or the model parameters. By propagating a compact input or parameter set as symbolic intervals through the forward and backward computations of the neural network, we can formally certify the robustness of gradient-based explanations. Our bounds are differentiable, so we can incorporate provable explanation robustness into neural network training. Empirically, our method surpasses the robustness provided by previous heuristic approaches. We find that our training method is the only one able to learn neural networks with certificates of explanation robustness across all six datasets tested.
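The toy NumPy sketch below illustrates the general flavour of interval-based certification for a one-hidden-layer ReLU network; it is not the authors' method, and the network, perturbation model, and bounding rules are assumptions made purely for illustration.

```python
# Toy sketch: bound the input gradient of y = w2 . relu(W1 x + b1) over an
# l_inf ball of radius eps around x, by propagating intervals forward (to the
# pre-activations) and backward (through the ReLU derivative).
import numpy as np

def certify_gradient_bounds(W1, b1, w2, x, eps):
    # Forward pass with intervals: bound the pre-activations z = W1 x + b1
    # over the input box [x - eps, x + eps].
    lo, hi = x - eps, x + eps
    W1_pos, W1_neg = np.maximum(W1, 0), np.minimum(W1, 0)
    z_lo = W1_pos @ lo + W1_neg @ hi + b1
    z_hi = W1_pos @ hi + W1_neg @ lo + b1

    # Backward pass with intervals: relu'(z) is 0, 1, or unknown ([0, 1])
    # depending on whether the pre-activation interval crosses zero.
    d_lo = (z_lo > 0).astype(float)          # 1 only if surely active
    d_hi = (z_hi > 0).astype(float)          # 1 if possibly active
    # Gradient dy/dx = W1^T (w2 * relu'(z)); bound each term by its sign.
    t = w2[:, None] * W1                     # per-hidden-unit contribution
    g_lo = (np.maximum(t, 0) * d_lo[:, None] + np.minimum(t, 0) * d_hi[:, None]).sum(0)
    g_hi = (np.maximum(t, 0) * d_hi[:, None] + np.minimum(t, 0) * d_lo[:, None]).sum(0)
    return g_lo, g_hi                        # element-wise bounds on the explanation

rng = np.random.default_rng(0)
W1, b1, w2 = rng.normal(size=(8, 4)), rng.normal(size=8), rng.normal(size=8)
g_lo, g_hi = certify_gradient_bounds(W1, b1, w2, rng.normal(size=4), eps=0.1)
print(g_hi - g_lo)  # certified worst-case wiggle room of the gradient explanation
```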
Vision Transformer (ViT) extracts the final representation either from the class token or from an average of all patch tokens, following the architecture of the Transformer in Natural Language Processing (NLP) or of Convolutional Neural Networks (CNNs) in computer vision. However, studies of how best to aggregate the patch tokens remain limited to average pooling, even though widely used pooling strategies, such as max and GeM pooling, could be considered. Despite their effectiveness, the existing pooling strategies do not account for the architecture of ViT or for channel-wise differences in the activation maps, aggregating crucial and trivial channels with the same importance. In this paper, we present Group Generalized Mean (GGeM) pooling as a simple yet powerful pooling strategy for ViT. GGeM divides the channels into groups and computes GeM pooling with a shared pooling parameter per group. As ViT groups the channels via the multi-head attention mechanism, grouping the channels with GGeM leads to lower head-wise dependence while amplifying important channels in the activation maps. Using GGeM yields performance boosts of 0.1 to 0.7 percentage points over the baselines and achieves state-of-the-art performance for ViT-Base and ViT-Large models on the ImageNet-1K classification task. Moreover, GGeM outperforms the existing pooling strategies on image retrieval and multi-modal representation learning tasks, demonstrating its superiority across a variety of tasks. GGeM is simple in that only a few lines of code are necessary for its implementation.
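A minimal sketch of how GGeM-style pooling could look in PyTorch is given below; the group count, exponent initialisation, and clamping constant are assumptions, not the paper's settings. GeM over a group g is (mean_i x_i^p_g)^(1/p_g), computed over the patch tokens with one shared exponent per channel group.

```python
import torch
import torch.nn as nn

class GGeM(nn.Module):
    def __init__(self, dim: int = 768, num_groups: int = 12, p_init: float = 3.0, eps: float = 1e-6):
        super().__init__()
        assert dim % num_groups == 0
        self.num_groups, self.eps = num_groups, eps
        # One learnable pooling exponent per group (e.g. one per attention head).
        self.p = nn.Parameter(torch.full((num_groups,), p_init))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) patch tokens, class token excluded
        b, n, d = tokens.shape
        x = tokens.clamp(min=self.eps).view(b, n, self.num_groups, d // self.num_groups)
        p = self.p.view(1, 1, -1, 1)
        # Generalised mean over the patch dimension, with a shared p per group.
        pooled = x.pow(p).mean(dim=1).pow(1.0 / p.squeeze(1))
        return pooled.reshape(b, d)  # (batch, dim) image-level representation

pooled = GGeM()(torch.randn(2, 196, 768))
print(pooled.shape)  # torch.Size([2, 768])
```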
Modern Deep Learning (DL) models have grown to sizes requiring massive clusters of specialized, high-end nodes to train. Designing such clusters to maximize both performance and utilization, so as to amortize their steep cost, is a challenging task requiring a careful balance of compute, memory, and network resources. Moreover, each model's plethora of tuning knobs drastically affects performance, with optimal values often depending on the underlying cluster's characteristics, which necessitates a complex cluster-workload co-design process. To facilitate design-space exploration of such massive DL training clusters, we introduce COMET, a holistic cluster design methodology and workflow for jointly studying the impact of parallelization strategies and key cluster resource provisioning on the performance of distributed DL training. We develop a step-by-step process to establish a reusable and flexible methodology, and demonstrate its application with a case study of training a Transformer-1T model on a cluster with variable compute, memory, and network resources. Our case study demonstrates COMET's utility in identifying promising architectural optimization directions and in guiding system designers in configuring key model and cluster parameters.
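To make the notion of cluster-workload design-space exploration concrete, the toy sweep below enumerates a few parallelization degrees and network fabrics under a deliberately crude analytical cost model; the model, parameter values, and candidate grid are all illustrative stand-ins, not COMET's actual workflow or performance model.

```python
# Toy design-space sweep: pick the (TP, PP, DP, fabric) combination with the
# lowest estimated step time that still fits in device memory.
from itertools import product

MODEL_PARAMS = 1e12            # ~Transformer-1T parameters (assumed)
BYTES_PER_PARAM = 2            # fp16 weights
TOKENS_PER_STEP = 4e6

def step_time(tp, pp, dp, flops_per_gpu, hbm_gb, link_gbps):
    gpus = tp * pp * dp
    shard_gb = MODEL_PARAMS * BYTES_PER_PARAM / (tp * pp) / 1e9
    if shard_gb > hbm_gb:                       # model shard must fit in device memory
        return None
    compute = 6 * MODEL_PARAMS * TOKENS_PER_STEP / (gpus * flops_per_gpu)
    comm = MODEL_PARAMS * BYTES_PER_PARAM / (dp * link_gbps * 1e9 / 8)  # crude all-reduce term
    return compute + comm

# Sweep parallelization degrees and two candidate network fabrics.
best = min(
    ((tp, pp, dp, bw, t) for tp, pp, dp, bw in product([4, 8], [8, 16], [16, 32], [200, 400])
     if (t := step_time(tp, pp, dp, flops_per_gpu=3e14, hbm_gb=80, link_gbps=bw)) is not None),
    key=lambda r: r[-1],
)
print("best (TP, PP, DP, link Gb/s, est. step time s):", best)
```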
Neural network interpretation methods, particularly feature attribution methods, are known to be fragile with respect to adversarial input perturbations. To address this, several methods that enhance the local smoothness of the gradient during training have been proposed for attaining \textit{robust} feature attributions. However, the normalization of attributions, which is essential for their visualization, has not been considered, and this has been an obstacle to understanding and improving the robustness of feature attribution methods. In this paper, we provide new insights by taking such normalization into account. First, we show that for every non-negative homogeneous neural network, a naive $\ell_2$-robust criterion for gradients is \textit{not} normalization invariant, which means that two functions with the same normalized gradient can have different criterion values. Second, we formulate a normalization-invariant cosine distance-based criterion and derive its upper bound, which gives insight into why simply minimizing the Hessian norm at the input, as has been done in previous work, is not sufficient for attaining robust feature attribution. Finally, we propose to combine both the $\ell_2$ and the cosine distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient. As a result, we show experimentally that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100 without significantly hurting accuracy, compared to recent baselines. To the best of our knowledge, this is the first work to verify the robustness of interpretations on a larger-scale dataset beyond CIFAR-10, thanks to the computational efficiency of our method.
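The sketch below shows one way such a combined regularizer could be written in PyTorch; the perturbation scheme, weighting coefficients, and double-backpropagation formulation are assumptions for illustration rather than the paper's exact training objective.

```python
# Penalise both the l2 change and the cosine (direction) change of the input
# gradient under a small random perturbation, so the normalized attribution stays aligned.
import torch
import torch.nn.functional as F

def robust_attribution_loss(model, x, y, lambda_l2=0.1, lambda_cos=1.0, eps=1e-2):
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # Input gradient at the clean point (create_graph=True enables double backprop).
    g = torch.autograd.grad(task_loss, x, create_graph=True)[0]

    # Gradient at a nearby perturbed point, approximating local gradient change.
    x_adv = (x.detach() + eps * torch.randn_like(x)).requires_grad_(True)
    g_adv = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv, create_graph=True)[0]

    g_flat, g_adv_flat = g.flatten(1), g_adv.flatten(1)
    l2_term = (g_flat - g_adv_flat).pow(2).sum(dim=1).mean()                 # magnitude stability
    cos_term = (1 - F.cosine_similarity(g_flat, g_adv_flat, dim=1)).mean()   # direction stability
    return task_loss + lambda_l2 * l2_term + lambda_cos * cos_term
```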
To ensure vehicle safety, the impact performance of a wheel must be verified through a wheel impact test during wheel development. However, manufacturing and testing a real wheel requires significant time and money, because developing an optimal wheel design involves numerous iterations of modifying the design and verifying its safety performance. Accordingly, wheel impact tests have been replaced by computer simulations such as finite element analysis (FEA); however, such simulations still incur high computational costs for modeling and analysis and require FEA experts. In this study, we present a deep learning-based model for predicting the impact performance of aluminum road wheels that replaces computationally expensive and time-consuming 3D FEA. For this purpose, 2D disk-view wheel image data, 3D wheel voxel data, and the barrier mass value used in the wheel impact test are utilized as inputs to predict the magnitude of the maximum von Mises stress, its corresponding location, and the stress distribution over the 2D disk view. The input data are first compressed into a latent space with a 3D convolutional variational autoencoder (cVAE) and a 2D convolutional autoencoder (cAE). Subsequently, fully connected layers are used to predict the impact performance, and a decoder is used to predict the stress-distribution heatmap of the 2D disk view. The proposed model can replace the impact test in the early wheel-development stage by predicting the impact performance in real time, and it can be used without domain knowledge. The time required for the wheel development process can thereby be reduced.
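The compressed PyTorch sketch below conveys the overall shape of such a model; the encoder depths, latent sizes, output parameterisation, and fusion scheme are illustrative assumptions, not the architecture reported in the paper.

```python
# A 3D conv VAE-style encoder for the voxels, a 2D conv AE encoder for the disk-view
# image, fully connected layers for the scalar impact quantities, and a decoder for
# the 2D stress-distribution heatmap.
import torch
import torch.nn as nn

class WheelImpactPredictor(nn.Module):
    def __init__(self, z3d: int = 64, z2d: int = 64):
        super().__init__()
        self.enc3d = nn.Sequential(nn.Conv3d(1, 8, 4, 2, 1), nn.ReLU(),
                                   nn.Conv3d(8, 16, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.to_z3d = nn.LazyLinear(2 * z3d)              # mean and log-variance (VAE head)
        self.enc2d = nn.Sequential(nn.Conv2d(1, 8, 4, 2, 1), nn.ReLU(),
                                   nn.Conv2d(8, 16, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.to_z2d = nn.LazyLinear(z2d)
        # +1 for the barrier mass value used in the impact test.
        self.regressor = nn.Sequential(nn.Linear(z3d + z2d + 1, 128), nn.ReLU(),
                                       nn.Linear(128, 3))  # e.g. max stress plus a 2D location
        self.decoder = nn.Sequential(nn.Linear(z3d + z2d + 1, 16 * 16), nn.Unflatten(1, (1, 16, 16)),
                                     nn.Upsample(scale_factor=4), nn.Conv2d(1, 1, 3, padding=1))

    def forward(self, voxels, image, barrier_mass):
        mu, _logvar = self.to_z3d(self.enc3d(voxels)).chunk(2, dim=1)   # use the mean at inference
        z = torch.cat([mu, self.to_z2d(self.enc2d(image)), barrier_mass], dim=1)
        return self.regressor(z), self.decoder(z)          # scalar targets, stress heatmap

pred, heatmap = WheelImpactPredictor()(torch.randn(2, 1, 32, 32, 32),
                                       torch.randn(2, 1, 64, 64),
                                       torch.rand(2, 1))
```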
With the growth of smart building applications, occupancy information for residential buildings is becoming increasingly important. In the context of the smart building paradigm, this information is needed for a wide range of purposes, including improving energy efficiency and occupant comfort. In this study, occupancy detection in residential buildings is implemented using deep learning based on technical information from household appliances. To this end, a novel occupancy detection approach for smart residential building systems is proposed. Datasets of appliance, sensor, lighting, and HVAC measurements collected by a smart metering system are used for the simulations. Support vector machine and autoencoder algorithms are used to classify the datasets. A confusion matrix is used to compute accuracy, precision, recall, and F1 score and to demonstrate the comparative performance of the proposed method for occupancy detection. The proposed algorithm achieves 95.7-98.4% accuracy using the technical information of the appliances. To validate the occupancy detection results, principal component analysis and the t-distributed stochastic neighbor embedding (t-SNE) algorithm are applied. By using occupancy detection, the power consumption of the renewable energy system in the smart building is reduced by 11.1-13.1%.
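A minimal scikit-learn sketch of the SVM branch of such a pipeline is shown below (the autoencoder branch is omitted); the synthetic data, feature columns, and hyperparameters are placeholders rather than the study's smart-metering dataset or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are time steps, columns are appliance/sensor/lighting/HVAC readings.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = (X[:, :4].sum(axis=1) > 0).astype(int)          # 1 = occupied, 0 = unoccupied

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Accuracy, precision, recall, and F1 derived from the confusion matrix.
print(confusion_matrix(y_te, pred))
print(classification_report(y_te, pred, digits=3))

# Validate the detected occupancy structure with PCA and t-SNE projections.
X_pca = PCA(n_components=2).fit_transform(X_te)
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_te)
```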
To replace fossil fuels through the use of renewable energy sources, the resource imbalance of intermittent wind and photovoltaic (PV) power is a key issue for peer-to-peer (P2P) power trading. To address this problem, this paper introduces reinforcement learning (RL) techniques. For the RL, a graph convolutional network (GCN) and a bidirectional long short-term memory (Bi-LSTM) network are jointly applied to P2P power trading between nanogrid clusters, based on cooperative game theory. Flexible and reliable DC nanogrids are well suited to integrating renewable energy into distribution systems. Each local nanogrid cluster takes the role of a prosumer, focusing on both power production and consumption. For the power management of the nanogrid clusters, multi-objective optimization is applied to each local nanogrid cluster using Internet of Things (IoT) technology. Electric vehicle (EV) charging and discharging is scheduled considering the intermittent characteristics of wind and PV power generation. RL algorithms such as the deep Q-learning network (DQN), deep recurrent Q-learning network (DRQN), Bi-DRQN, proximal policy optimization (PPO), GCN-DQN, GCN-DRQN, GCN-Bi-DRQN, and GCN-PPO are used for the simulations. As a result, the cooperative P2P power trading system maximizes profit by exploiting the time-of-use (TOU) tariff-based electricity cost and the system marginal price (SMP), and minimizes the amount of power drawn from the grid. The power management of the nanogrid clusters with P2P power trading is simulated in real time on a distribution test feeder, and the proposed GCN-PPO technique reduces the electricity cost of the nanogrid clusters by 36.7%.
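As a small illustration of the GCN component, the sketch below defines a toy GCN-based Q-network over a handful of nanogrid clusters; the graph, state features, and action set are placeholders and not the paper's environment or agent design.

```python
import torch
import torch.nn as nn

class GCNQNetwork(nn.Module):
    def __init__(self, n_features: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.w1 = nn.Linear(n_features, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.q_head = nn.Linear(hidden, n_actions)    # per-cluster trading actions (buy/sell/idle)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_clusters, n_features) local PV/wind output, load, EV state, TOU price, SMP
        # adj: (n_clusters, n_clusters) normalized adjacency of the trading graph
        h = torch.relu(adj @ self.w1(x))              # first graph-convolution layer
        h = torch.relu(adj @ self.w2(h))              # second graph-convolution layer
        return self.q_head(h)                         # (n_clusters, n_actions) Q-values

# Four nanogrid clusters connected in a ring, each choosing among 3 trading actions.
adj = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float32)
adj = adj + torch.eye(4)                              # add self-loops
adj = adj / adj.sum(dim=1, keepdim=True)              # row-normalize
q_values = GCNQNetwork(n_features=6, n_actions=3)(torch.randn(4, 6), adj)
greedy_actions = q_values.argmax(dim=1)               # epsilon-greedy would be used in training
```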
Data tensors of order 3 and higher are now routinely generated. These data collections are increasingly large and growing. They are either tensor fields (e.g., images, videos, geographic data), in which each data location contains important information, or permutation-invariant general tensors (e.g., unsupervised latent-space learning, graph network analysis, recommendation systems, etc.). Directly accessing such large data tensor collections to retrieve information has become increasingly prohibitive. We learn approximately full-rank yet compact tensor sketches with decomposed representations that provide compact space, time, and spectral embeddings of tensor fields (P-SCT) and general tensors (P-SCT-Permute). All subsequent information querying is performed, with high accuracy, on the generative sketches. We produce optimal rank-r Tucker decompositions of arbitrary-order data tensors by building tensor sketches from sample-efficient sub-samplings of the tensor slices. Our sample-efficient strategy is learned via adaptive stochastic Thompson sampling using Dirichlet distributions as conjugate priors.
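The toy NumPy sketch below illustrates the general idea of building a Tucker-style sketch from sub-sampled slices; it uses plain uniform sampling and HOSVD-style factors, not the paper's adaptive Dirichlet/Thompson-sampling strategy.

```python
# Sub-sample fibers of each mode unfolding, estimate per-mode factor matrices from
# the samples, and keep a small core tensor as the sketch.
import numpy as np

def mode_unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def sketch_tucker(X, ranks, n_cols, rng):
    factors = []
    for mode, r in enumerate(ranks):
        M = mode_unfold(X, mode)
        cols = rng.choice(M.shape[1], size=min(n_cols, M.shape[1]), replace=False)
        U, _, _ = np.linalg.svd(M[:, cols], full_matrices=False)
        factors.append(U[:, :r])                      # mode-r basis from sampled fibers
    # Core tensor: project X onto the three factor bases.
    G = np.einsum('ijk,ia,jb,kc->abc', X, *factors)
    return G, factors

rng = np.random.default_rng(0)
X = np.einsum('ia,jb,kc,abc->ijk', rng.normal(size=(40, 5)), rng.normal(size=(50, 5)),
              rng.normal(size=(60, 5)), rng.normal(size=(5, 5, 5)))  # a rank-(5,5,5) tensor
G, (A, B, C) = sketch_tucker(X, ranks=(5, 5, 5), n_cols=200, rng=rng)
X_hat = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # relative reconstruction error
```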